IP Supply Chain

By George Janac, Trent Poltronetti, Aidan Herbert, and Daryl RuDusky
Posted  03/01/01, 01:57:39 PM EDT

As the era of system-on-a-chip technology emerges, there's no doubt that change, both evolutionary and revolutionary, will be required to bring these huge, multi-functional chips to fruition. Enhanced partitioning, hierarchical support, and platform-based integrators are among the new tools and functions needed for these designs. Designers, for their part, must abandon NIH ("not invented here") thinking and embrace design-for-reuse if the new paradigm is to meet its date with destiny. The following industry commentary discusses various aspects of the intellectual property (IP) supply chain that may prove to be the backbone of the SOC roadmap. Admittedly, some of the details are mundane; housekeeping always is. But some of it is truly visionary. And, as in all things, both ends of the spectrum are necessary. The IP supply chain will only be as strong as its weakest link.


Part I: The Trusted Chain

by George Janac, InTime Software

What is the future of the IP supplier/designer relationship? Answering that question requires an understanding of where the relationship stands today, for good or bad. The web has made a great deal of information accessible from the engineer's desktop, in contrast to having to get on the phone with distributors and suppliers to find out what is available. So, the good news is that the amount of information available prior to picking up the phone has increased. The bad news, however, is that the information is principally limited to data sheets. The missing link is the requisite information on licensing models and data useful for incorporation into project tools. That, we believe, is the future of the IP supplier/designer relationship.

Usefulness of the web, as it applies to IP, should be measured as it is for everything else on the web: by the number of clicks it takes to get to information, how much information you actually get, and the quality of that information before you have to pick up the phone. So how are we doing today? Web sites like SiliconX, Design-Reuse, and so forth all provide a useful "catalog" of IP and services. They provide a means of finding whom you can call about a piece of IP. Their search engines help you narrow down the scope, saving time. But these sites point designers at data sheets and provide contact information only.

Why don't we have EDA data and pricing models on the web? Mainly because of security and business-model confidentiality issues. If an IP provider publishes RTL source code for customers to map the design to a technology, it is giving away its actual IP. This will not work. Pricing on the web enables competitors to see models and undercut them. Therefore, it's easy to understand the general unwillingness to openly publish EDA data and pricing models on the web. But it also points to the shortcomings of today's approaches, which can best be described as public publishing of information.

Clearly, the future is in private, secure, designer-to-vendor interaction systems.

Most other industries have moved toward trusted exchanges, or custom supply chains. Companies that design products set up partner networks with their suppliers. The limited access of these networks provides for a higher level of trust. The IP supply chain needs to move in this direction as well, if we actually hope to increase IP usage.

When an IP supplier makes repeated sales to a customer, a trusted bond develops. The supplier and customer need a mechanism to communicate privately. Once a mechanism is established in a partnership, they can then decide on what level of information they are willing to share between themselves.

A supplier may be willing to give automatic quotes, as well as engineering data, to a large and trusted customer. At the same time, a new customer may only have access to datasheets. The key is in allowing a customer to configure a supply chain with trusted suppliers. In turn, IP suppliers participating in these networks need to be able to set up and control the level of trust individually for each customer.
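
To make the idea concrete, the following C sketch shows one way a supplier might encode per-customer trust levels and decide what can be released automatically. The tier names, customers, and release rules are illustrative assumptions, not a description of any existing system.

/* Illustrative sketch of per-customer trust levels in an IP supply network.
 * The tiers, customers, and rules are hypothetical. */
#include <stdio.h>
#include <string.h>

typedef enum {              /* increasing levels of trust */
    TIER_PUBLIC = 0,        /* datasheets only             */
    TIER_QUOTES,            /* + automatic price quotes    */
    TIER_ENG_DATA           /* + engineering/EDA models    */
} trust_tier;

typedef enum { ASSET_DATASHEET, ASSET_QUOTE, ASSET_ENG_MODEL } asset_kind;

typedef struct {
    const char *customer;
    trust_tier  tier;       /* set individually by the IP supplier */
} trust_entry;

/* Each supplier controls its own table; unknown parties start at TIER_PUBLIC. */
static const trust_entry network[] = {
    { "large_trusted_oem", TIER_ENG_DATA },
    { "repeat_customer",   TIER_QUOTES   },
    { "new_prospect",      TIER_PUBLIC   },
};

static trust_tier tier_for(const char *customer)
{
    for (size_t i = 0; i < sizeof network / sizeof network[0]; i++)
        if (strcmp(network[i].customer, customer) == 0)
            return network[i].tier;
    return TIER_PUBLIC;     /* unknown parties see datasheets only */
}

static int can_release(const char *customer, asset_kind what)
{
    trust_tier t = tier_for(customer);
    switch (what) {
    case ASSET_DATASHEET: return 1;
    case ASSET_QUOTE:     return t >= TIER_QUOTES;
    case ASSET_ENG_MODEL: return t >= TIER_ENG_DATA;
    }
    return 0;
}

int main(void)
{
    printf("quote for new_prospect? %s\n",
           can_release("new_prospect", ASSET_QUOTE) ? "yes" : "no");
    printf("model for large_trusted_oem? %s\n",
           can_release("large_trusted_oem", ASSET_ENG_MODEL) ? "yes" : "no");
    return 0;
}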

The trusted supply network then becomes more a machine-to-machine system than a "pick up the phone" system. Why does this make sense? Because it already exists today. IP suppliers have dedicated sales and support people calling on their largest customers. These customers can get anything they want from their suppliers. So, this is really a human-trusted supply chain. Making such a chain work over the web just improves efficiency.

Even given that this infrastructure can be established, problems will still exist. Security of information will continue to be key, even in a trusted relationship. Having a customer receive three pieces of RTL source from three trusted vendors for evaluation is a risk. Three vendors have just exposed their full intellectual property while, in the end, only one or none will get the business. In either case, the non-chosen ones will feel like they may have been cheated. Models have to be developed that allow users to do evaluations, but at the same time won't compromise vendor IP.

Secure soft IP is a major issue to be overcome by EDA suppliers. For the supply chain to work transparently, we need a way to distribute IP that can be mapped onto a library and process without compromising the security of the original RTL. Today we have symbols, physical abstracts, and encrypted simulation models that don't compromise the original IP. So, hard IP can be securely distributed. What we need is the equivalent of the encrypted simulation model for timing analysis and physical implementation. The industry needs an encrypted RTL that can be evaluated on a library for both timing and layout, allowing users to create timing stamp and physical abstract models. By restricting the user from seeing the original RTL, while still producing a final gate-level netlist or layout, evaluations can be done securely. This development would significantly increase the willingness of IP suppliers to give out IP models. Evaluation without intellectual property compromise is the key to a healthy IP supply chain.

While the above may appear alien to semiconductor suppliers, IP suppliers, and even customers, it's really the model followed by many other industries. Don't panic: the phone hasn't yet been made obsolete. You can just get a lot more done before you have to pick it up and negotiate.


Part II: Streamlining the System-Chip Supply Chain

by Trent Poltronetti, Synchronicity, Inc.

Creating a sustainable competitive advantage in today's advanced system-chip market requires constant scrutiny of development practices.

Increasingly, organizations are realizing that augmenting design resources by such means as outsourcing, collaborative development, and the use of pre-designed content is essential for timely completion of complex, differentiated ICs. With this trend toward an outsourced resource model, high-tech organizations are now finding greater return on efforts to optimize their development supply chain (DSC) than on its predecessor, the manufacturing supply chain (MSC).

The electronics industry is capitalizing on the DSC as a competitive weapon, particularly as the strategic value of extending resources becomes apparent. Reaching outside the existing organization to increase development manpower, add expertise, or acquire design content provides the agility necessary for survival in today's marketplace.

Traditionally, organizations focused on the MSC to find production efficiencies, reduce cycle time, and improve quality. Central to MSC optimization was information technology (IT) that enabled management capabilities such as the coordination of just-in-time delivery with internal work-in-progress. Simply by effectively managing work and material flows within and beyond the organization, companies found that they could sustain a competitive advantage.

In many ways, the DSC is very similar to its manufacturing predecessor. The goal of the DSC is to minimize development cycle time while assuring acceptable quality. Both types of supply chain seek to manage the flow of resources between outside suppliers, partners, and internal development groups. The task of the DSC is made arguably easier by the fact that almost all resources being managed and exchanged between parties are electronic data, as opposed to the physical materials traversing the MSC. The proliferation of the Internet provides a proven and ready mechanism for interchange of electronic goods, but raises several new issues.

Tackling the challenges

A single design group seeking to increase the number of designers and manage their interaction faces a base level of complexity, but as the organization extends to encompass multiple, geographically dispersed groups, more factors come into play. When participants outside the company, such as external contractors or partner companies, enter the equation, still more factors must be managed. Likewise, DSC complexity increases even further when design content is drawn from IP reuse repositories or from outside sources.

The essential elements of a design management system for a single large development group include: centralized storage, pervasive data availability, tight team-wide communications, bug tracking, design lifecycle tracking, and management reporting. A web-based central hub provides an ideal foundation for design data storage, access, and inter-group communications, as the web is a pervasive, proven network that is familiar to a wide range of users. More importantly, standard Internet protocols are one of the few things that system administrators are comfortable letting pass through the firewall.

A state-machine-driven management system provides immediate access to design information in real time. Within such a system, automatic event detection and response capabilities can dramatically speed development by monitoring design changes and immediately responding via pre-defined actions that vary from e-mail notification to launching a simulation. Important version control and release management capabilities track changes made to every design file and rapidly propagate or report those changes. The management system must also recognize complex data types and formats, and tie bug reports to specific file versions.

Finally, informed decision making is supported by real-time management reporting such as status updates, project histories, and trends analysis.
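
As a rough illustration of the event-detection-and-response idea described above, the following C sketch matches a design-file check-in against pre-configured rules and dispatches pre-defined actions such as notifying the team or launching a regression. The rule set, file names, and actions are hypothetical examples.

/* Minimal sketch: a check-in event is matched against configured rules and
 * dispatched to an action (notification, regression launch, and so on). */
#include <stdio.h>
#include <string.h>

typedef struct {
    const char *file;       /* design object that changed        */
    const char *new_rev;    /* revision produced by the check-in */
    const char *author;
} checkin_event;

typedef void (*action_fn)(const checkin_event *ev);

static void notify_team(const checkin_event *ev)
{
    printf("mail: %s checked in %s rev %s\n", ev->author, ev->file, ev->new_rev);
}

static void launch_regression(const checkin_event *ev)
{
    printf("launching simulation regression for %s rev %s\n", ev->file, ev->new_rev);
}

typedef struct {
    const char *pattern;    /* which files the rule applies to (substring match) */
    action_fn   action;     /* pre-defined response                              */
} rule;

static const rule rules[] = {
    { ".v",        notify_team       },   /* any RTL change: tell the team    */
    { "top_level", launch_regression },   /* top-level change: run regression */
};

static void on_checkin(const checkin_event *ev)
{
    for (size_t i = 0; i < sizeof rules / sizeof rules[0]; i++)
        if (strstr(ev->file, rules[i].pattern))
            rules[i].action(ev);
}

int main(void)
{
    checkin_event ev = { "top_level_arbiter.v", "1.42", "designer_a" };
    on_checkin(&ev);    /* matches both rules: notification and regression */
    return 0;
}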

Multiple collaboration

Often, increasing the scope of a design team also means spreading the team across multiple locations in multiple time zones. With such dispersion come the benefits of increased manpower, the leveraged expertise of diverse groups, and productivity optimization through 24-hour development. It also extends development management considerations beyond those of the single large design group to include inter-site communications, security of electronically transmitted data, and time-zone differences.

Keeping remote groups synchronized requires zero-latency communications so that anyone, anywhere can instantly see the current state of the entire design and understand open issues. At the same time, the fact that communications and highly proprietary design data are traversing a wide network makes security an important consideration. Encryption, such as 128-bit SSL, is thus a vital aspect of any multi-site project management system.

Philips Semiconductor recently adopted a new system to manage parallel development across its multiple hardware development groups. Acknowledging commonality between language-based and full-custom development, Philips employed the latest in design and project management technology to coordinate multi-site teams. This more sophisticated design management system enabled designers to work in parallel across multiple sites and improved Philips' time to market on advanced products.

As development costs rise for semiconductor architectures like DSPs and processors, it is becoming increasingly common for companies to partner in 'co-opetition' to spread those costs. This highly strategic form of cooperation maximizes economies of scale, increases manpower, broadens expertise, and can increase productivity. At the same time, it raises the need for fine-grained access control and standardized communications.

When Motorola and Lucent (now Agere) partnered to form StarCore for concurrent DSP development, they faced the challenge of coordinating efforts among multiple, globally dispersed locations within two different companies. Critical to this effort was a secure DSC management backbone that enabled distribution and sharing of DSP intellectual property over the Internet. Security through encryption and access control also made it possible for the two companies to share pertinent information without exposing the rest of their IP portfolios.

The most effective IP repository enables searches by categorized component hierarchy, returns summarized and tabulated search results, and supports side-by-side comparisons of similar components. Also helpful are shared knowledge about a given component from previous implementations, lifecycle states (level of IP maturity), audit trails, and threaded discussions.
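
The following C sketch illustrates, under assumed category names and lifecycle states, how a search over such a repository might filter by categorized component hierarchy and minimum IP maturity. It is a toy example, not any vendor's actual repository schema.

/* Sketch of searching an IP repository by category hierarchy and maturity.
 * Categories, states, and entries are invented for illustration. */
#include <stdio.h>
#include <string.h>

typedef enum { STATE_PLANNED, STATE_IN_DESIGN, STATE_VERIFIED, STATE_SILICON_PROVEN } lifecycle;

typedef struct {
    const char *name;
    const char *category;   /* hierarchical path, e.g. "interfaces/serial/uart" */
    lifecycle   state;      /* level of IP maturity                             */
    int         prior_uses; /* how many previous implementations used it        */
} ip_entry;

static const ip_entry repo[] = {
    { "uart16550_lite", "interfaces/serial/uart",  STATE_SILICON_PROVEN, 7 },
    { "fast_uart_dma",  "interfaces/serial/uart",  STATE_VERIFIED,       2 },
    { "pci_target",     "interfaces/parallel/pci", STATE_IN_DESIGN,      0 },
};

/* List all components under a category branch that meet a minimum maturity. */
static void search(const char *category_prefix, lifecycle min_state)
{
    printf("%-16s %-26s %-6s %s\n", "name", "category", "state", "uses");
    for (size_t i = 0; i < sizeof repo / sizeof repo[0]; i++) {
        const ip_entry *e = &repo[i];
        if (strncmp(e->category, category_prefix, strlen(category_prefix)) == 0 &&
            e->state >= min_state)
            printf("%-16s %-26s %-6d %d\n", e->name, e->category, e->state, e->prior_uses);
    }
}

int main(void)
{
    /* side-by-side view of the serial-interface candidates that are at least verified */
    search("interfaces/serial", STATE_VERIFIED);
    return 0;
}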

In an effort to leverage existing resources and improve their ability to deliver more complex products to market more quickly, Hitachi last year undertook deployment of a design reuse infrastructure. Now Hitachi engineers worldwide can find, evaluate, and download pre-existing IC design blocks as well as design libraries. Deploying a reuse system that was compatible with Hitachi's design and project management system provided additional efficiencies. Hitachi teams around the globe now easily submit completed design blocks for reuse and have ready access to the entire IP repository through any web browser.

Another means of augmenting the DSC for the sake of productivity is external IP procurement. In recent years, the industry has witnessed the rapid proliferation of third-party IP suppliers and distributors. Leveraging this increasingly valuable resource in your DSC compounds the considerations for IP reuse with the additional need for compatibility between reuse infrastructures and provider access mechanisms. Seamless interaction and compatibility with online public IP catalogs, like those from the VCX and Synopsys (the IP Catalyst Catalog), as well as with Star IP from third-party vendors such as ARM and inSilicon, facilitates rapid evaluation and deployment of procured IP.

Successful system-chip development today requires sophisticated solutions that leverage resources within and between enterprises.

Managing the development supply chain that links these diverse resources is now a practical necessity for success. With the power of the web and advanced design and project management technology, such as that from Synchronicity, the benefits of effective DSC management are being realized by organizations at all levels of DSC complexity.


Part III: Platform-Based Design and the IP Supply Chain

by Aidan Herbert, Aptix Corp.

The market has shown that IP reuse and block-based design techniques are more successful than a pure RTL-based approach. The important problems to be solved in this post-RTL era relate to integrating, exercising, characterizing, and tuning systems assembled from virtual hardware components and embedded software components. RTL tools will continue to see incremental improvements. It's the block- and platform-based tools, however, that will drive the mega-functionality of very deep submicron designs.

Exciting new electronic products result from a value chain that encompasses processor IP, micro-peripheral IP, interconnect IP, I/O IP, software driver IP, protocol stacks, and RTOS. The challenge for EDA vendors is to provide tools and flows that accelerate and lubricate the IP supply chain. Functional verification is still a critical area. The verification gap has grown and is now a gulf that covers three important areas:

- Traditional verification and standards compliance validation for IP creators

- Functional evaluation and characterization to support IP exchange

- System-level integration, validation, and performance tuning for the IP consumer

There are certainly changes afoot: at last year's DAC there was a host of system-level design tools and languages, including products from companies such as Co-Design, Cynapps, Arexsys, and Vast. The question is: Does the industry need a new language, or is it more practical to upgrade the functional modeling approach at the front end and maintain the established RTL-to-gates strategy at the back end?

IP reuse: the European influence

Meanwhile, IP reuse is changing the way we design. The success of the European cell phone makers illustrates the advantages of a system-level focus. Their platform-based approach to cell phones enables them to introduce derivative designs on a monthly basis. Broader adoption of platform-based approaches is well under way and is being facilitated, in no small part, by the industry-wide Virtual Socket Interface Alliance.

Platforms are a means of managing complexity. Design platforms include hardware, software, design methodologies, IP authoring/characterization standards, and functional verification strategies. The efficiency of the platform is related to the level of integration and interoperability of the varied elements. The platform's efficiency also comes from its application focus. Elements like IP, software, or verification techniques may be tuned to the specific needs of the target application. The kernel elements of the platform can include a processor core, an RTOS, a DSP core, standard interfaces, and a bus. In addition to the kernel components, there may be embedded software components and a library of hardware virtual components. Product delivery and the creation of derivative designs are streamlined when the expertise and tools to support the platform exist.

Lubricating the IP supply chain

The attraction of IP is its "reuse without re-engineering" potential. This is a conceptual ideal, but the practical reality is that engineering bandwidth is consumed at each stage of the process: selection, evaluation, integration, and validation of IP. One cause of this is that we use low-level RTL tools to manage IP. Using RTL tools for IP integration and validation is a micro-level approach to a macro-level problem.

The industry recognizes this inefficiency and is currently spending money to circumvent it. Many vendors of core IP provide hardware evaluation boards (platforms) to demonstrate IP functionality and jumpstart software integration.

Evaluation boards are an expensive and unwieldy solution. The engineering effort to develop an evaluation board creates a large overhead in the IP industry. It can take up to a year to produce an evaluation board. Steps include: fabricate the IP, design and build an evaluation board, integrate tools, and develop a sample application. The IP vendor must then train staff and support the evaluation board.

A more practical approach is to use a hardware virtual prototype. This enables the IP vendor to share emulation-ready models of the IP as soon as the RTL is stable. Using a standard web browser, partners and customers can remotely test drive and evaluate the IP under conditions that can be customized to approximate their target application. The hardware virtual prototype provides the throughput to run meaningful evaluations. It has the I/O flexibility to represent reality, using either the real-time interface or the high-performance transaction interface. It also supports standard software and hardware tools, such as software debuggers, ICE, RTL tools, and waveform viewers.

The IP value chain is geography independent. An effective modeling solution must address the remote access requirements and associated security requirements of the IP supply chain. Remote access is needed to facilitate remote IP evaluation and to support co-design or system integration between geographically remote groups. Security is required to protect the vendor's IP; this includes controlled access and secure transactions. The Aptix approach has a web-enabled front end with integrated SSL, PGP, and an online electronic license manager, along with an integrated secure repository.

One deployment of the Aptix hardware virtual prototype solution is the www.esocverify.com web site, a commercial repository for emulation-ready IP. Users can remotely browse the repository, select IP, and profile functionality in real time. If the IP meets their requirements, they can obtain a two-part electronic license and download an emulation-ready model of the IP. The IP vendor grants the first part of the license; the certification is supplied at the website. Aptix certification means that the emulation-ready model meets the functional constraints of the vendor-supplied testbench, that the advertised emulation throughput of the IP model is certified by Aptix, and that the advertised emulation resource requirements of the IP model are certified by Aptix.
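
As a hedged illustration of the two-part license check, the C sketch below verifies that both the vendor grant and the site certification are present, and that the certification covers the three criteria listed above. The field names and rules are assumptions made for illustration, not the actual esocverify.com license format.

/* Illustrative check of a two-part electronic license before a model download. */
#include <stdio.h>
#include <stdbool.h>

typedef struct {
    bool vendor_grant;            /* part 1: granted by the IP vendor            */
    bool site_certified;          /* part 2: certification supplied at the site  */
    /* what the certification asserts about the emulation-ready model */
    bool passes_vendor_testbench; /* functional constraints of the testbench met */
    bool throughput_certified;    /* advertised emulation throughput certified   */
    bool resources_certified;     /* advertised emulation resources certified    */
} ip_license;

static bool download_allowed(const ip_license *lic)
{
    if (!lic->vendor_grant || !lic->site_certified)
        return false;                       /* both parts must be present */
    return lic->passes_vendor_testbench &&
           lic->throughput_certified &&
           lic->resources_certified;
}

int main(void)
{
    ip_license lic = { true, true, true, true, true };
    printf("download allowed: %s\n", download_allowed(&lic) ? "yes" : "no");
    return 0;
}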

Reuse without re-verification is high on the wish list of the IP consumer. Indicators show that this will not be a reality in the foreseeable future. However, IP and design reuse are changing the focus of functional verification and also enabling new levels of verification efficiency.

The focus of verification has shifted. It isn't necessary to verify the correctness of the design implementation of known, good IP blocks.

However, it is necessary to audit the functional capability of the imported block and validate compatibility with the rest of the system. This is not limited to validating the logical interface of IP blocks; complete system-level validation is necessary. Obscure corner cases, such as how a state machine deep in one block reacts to a stomped CRC, may have direct implications for how a FIFO in another block is cleared. Ad hoc verification strategies are unlikely to cover all cases of functional coupling between IP blocks, so a more systematic approach is required.

Embedded device drivers represent the detailed intelligence needed to control an IP block through its programmable interface. Device drivers are collections of discrete register operations organized into transactions. Transactions are quanta of device functionality. By means of a uniform C-API, device drivers provide access mechanisms to the intended transactional capability of the IP. Leveraging a device driver during verification allows the creation of clear, concise tests consisting of a series of calls to the device driver API.
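
The C sketch below illustrates this pattern with an invented DMA-style block: discrete register writes are grouped into a transaction behind a uniform driver call, and a test becomes a short sequence of calls to that API. The register map and driver names are hypothetical, and the register accessors simply log operations rather than drive a real bus-functional model.

/* Sketch of a device-driver transaction built from discrete register operations. */
#include <stdint.h>
#include <stdio.h>

/* In a real flow these would poke a bus-functional model or the emulator;
 * here they just log the discrete register operations. */
static void reg_write(uint32_t addr, uint32_t val) { printf("WR 0x%04x <= 0x%08x\n", addr, val); }
static uint32_t reg_read(uint32_t addr)            { printf("RD 0x%04x\n", addr); return 0; }

/* Hypothetical register map of the block under test. */
enum { DMA_SRC = 0x00, DMA_DST = 0x04, DMA_LEN = 0x08, DMA_CTRL = 0x0c, DMA_STAT = 0x10 };

/* One transaction: a quantum of device functionality built from register ops. */
static void dma_start_transfer(uint32_t src, uint32_t dst, uint32_t len)
{
    reg_write(DMA_SRC, src);
    reg_write(DMA_DST, dst);
    reg_write(DMA_LEN, len);
    reg_write(DMA_CTRL, 1u);        /* go */
}

static int dma_is_done(void) { return (reg_read(DMA_STAT) & 1u) != 0; }

/* A verification test is then a clear, concise series of driver calls. */
int main(void)
{
    dma_start_transfer(0x1000, 0x2000, 256);
    printf("transfer complete: %s\n", dma_is_done() ? "yes" : "not yet");
    return 0;
}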

Consumer products have been the great market accelerator. However, the consumer is a tough customer who demands quality, reliability, and innovation. To sustain momentum and market potential, our industry needs to refine the design strategies that work and tirelessly innovate to replace strategies that do not work.


Part IV: IP and C-Based Systems Design

by Daryl RuDusky, Celoxica Inc.

In traditional IC design flows, designers rely on a hardware description language (HDL) such as Verilog or VHDL to build structural representations of circuits. Now, however, high-level C-based system design languages allow designers to work at the functional level and rely on sophisticated compilers to produce structural representations needed to build hardware. In this new design approach, third-party intellectual property (IP) will likely reach its potential for delivering productivity advantages beyond that seen for IP in current HDL-based design environments.

With its promise for dramatically compressing product development cycles, the notion of pre-built functionality delivered through IP blocks has attracted industry-wide attention. Leading EDA vendors typically offer common functions as "commodity IP" blocks included in standard design libraries. In addition, designers can license "Star IP" blocks such as processor cores designed to fit in system-on-a-chip (SOC) designs.

Indeed, the attraction of IP is its potential to provide circuit designers with the ability to mix and match pre-built functions, allowing them to focus on their application's specialized functionality rather than spending time reinventing each function in their chip-level or system-level designs. IP cores enable engineers and companies to focus on the areas that yield the greatest return on investment.

Yet, for all its promise, the IP industry has been slow to achieve its full potential primarily due to difficulties in using IP in complex design work. For high-integration designs such as SOCs, developers face a challenge in completing the required interfaces between the IP and the rest of their design. Rather than spending time designing commodity functions, they find themselves spending time integrating IP blocks into system designs. In an attempt to address this problem, industry groups such as the Virtual Socket Interface Alliance (VSIA) as well as individual IP providers have tried offering standard interface protocols intended to provide an interface roadmap for IP developers and a simplified integration task for IP users. Unfortunately, the inherent complications have resulted in little success through these approaches.

More recently, the emergence of C-based design methods has introduced an important new element in the IP supply chain. With these design methods, developers work at a higher level of abstraction, using C-based languages to describe functions at the algorithmic rather than the structural level. Still, nearly all of these methods require designers to finish their work at the structural level: after completing a functional design as a C-based description, the designer must convert it to an HDL-level representation and carry the design forward from there. The need to switch between these markedly distinct levels undermines the advantages gained from using a high-level language early in the design.

The Handel-C compiler, in contrast, converts source code into an optimized representation that can be simulated or used to generate a netlist, allowing designers to take advantage of FPGA manufacturers' conversion tools to produce FPGA-based hardware rapidly.
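
To show what working at the algorithmic rather than structural level looks like, here is a plain ANSI C sketch of a four-tap moving-average filter of the kind a C-based flow would start from (this is ordinary C, not Handel-C syntax): the designer states what the block computes and leaves registers, adders, and control to the compiler.

/* Algorithm-level description of a 4-tap moving-average filter. */
#include <stdio.h>

#define TAPS 4

/* Push a new sample and return the filtered output. */
static int moving_average(int sample)
{
    static int window[TAPS];     /* in hardware: a shift register of samples */
    int sum = 0;

    for (int i = TAPS - 1; i > 0; i--)      /* shift in the new sample */
        window[i] = window[i - 1];
    window[0] = sample;

    for (int i = 0; i < TAPS; i++)          /* in hardware: an adder tree */
        sum += window[i];

    return sum / TAPS;
}

int main(void)
{
    int input[] = { 0, 4, 8, 12, 16, 20 };
    for (int i = 0; i < 6; i++)
        printf("in=%2d out=%2d\n", input[i], moving_average(input[i]));
    return 0;
}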

For IP consumers, C-based approaches offer an important new addition to the IP source stream. Developers now have the option to acquire not only hard and soft IP, but also IP in this new form, one that promises to facilitate custom extensions and features in a manner that is much more difficult to attain at the HDL level. Furthermore, within an organization, developers can share Handel-C code for hardware elements with the same ease once reserved for software code.

Although future trends for C-based IP remain to be seen, past trends offer some intriguing hints of what's to come. The explosive growth of Java was fueled in part by the widespread availability of IP as Java classes, permitting software developers to build on third-party IP functions to deliver unique, value-added functionality. Similarly, the availability of pre-built web components such as Microsoft's ActiveX objects provided a rich foundation for Windows developers to create new applications and rapidly deploy them to test new markets at low risk. Now, the combination of IP and reconfigurable hardware programs such as Xilinx's Internet Reconfigurable Logic program offers intriguing possibilities for new types of product upgrade programs built on network-accessible IP.

While C-based IP may offer exciting new possibilities in the future, it already enriches today's IP supply chain with application-level expertise and high-level functionality. C-based IP promises to accelerate developers' ability to deliver increasingly feature-rich products despite tightening time-to-market windows.
